Self-Organized Criticality

Self-organized criticality in neural networks from activity-based rewiring

Let $N=( \mathbf{a}, \mathbf{\bar{a}}, C )$ be an $n$-node network where

At every time step:

Parameters:

update function

keras layer

Criticality meets learning: Criticality signatures in a self-organizing recurrent neural network

The self-organizing recurrent network (SORN) consists of excitatory neurons $x\in\mathbb{R}^{N_E}$ and inhibitory neurons $y\in\mathbb{R}^{N_I}$ (with $N_I = 0.2 \times N_E$), coupled through a system of Boolean update equations with fast- and slow-acting responses.

Fast Dynamics

Fast-acting behavior affects both excitatory $x$ and inhibitory $y$ neurons by

$$x(t+1) = \Theta \biggl [ W^{EE}(t)x(t) - W^{EI}(t)y(t) + u^{ext}(t) + \xi^E(t) - T^E(t) \biggr ]$$

$$y(t+1) = \Theta \biggl [ W^{IE}(t)x(t) + \xi^I(t) - T^I(t) \biggr ]$$
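The fast dynamics can be sketched directly in NumPy. The sizes, connection densities, and noise scale below are illustrative assumptions, not values from these notes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes and noise scale (placeholders, not from the notes).
N_E, N_I = 200, 40          # N_I = 0.2 * N_E
sigma_noise = 0.05          # std of the Gaussian noise terms xi^E, xi^I

# Random sparse weights and thresholds, for illustration only.
W_EE = rng.random((N_E, N_E)) * (rng.random((N_E, N_E)) < 0.1)
W_EI = rng.random((N_E, N_I)) * (rng.random((N_E, N_I)) < 0.2)
W_IE = rng.random((N_I, N_E)) * (rng.random((N_I, N_E)) < 0.2)
T_E = rng.random(N_E)
T_I = rng.random(N_I)

def theta(v):
    """Heaviside step Theta: 1 where the argument is positive, else 0."""
    return (v > 0).astype(float)

def fast_step(x, y, u_ext):
    """One application of the Boolean update equations."""
    xi_E = rng.normal(0.0, sigma_noise, N_E)
    xi_I = rng.normal(0.0, sigma_noise, N_I)
    x_new = theta(W_EE @ x - W_EI @ y + u_ext + xi_E - T_E)
    y_new = theta(W_IE @ x + xi_I - T_I)
    return x_new, y_new

# Start from sparse random activity and take one step with no external input.
x = (rng.random(N_E) < 0.1).astype(float)
y = (rng.random(N_I) < 0.1).astype(float)
x, y = fast_step(x, y, u_ext=np.zeros(N_E))
```

Note that only the excitatory population receives external input $u^{ext}$, matching the update equations above.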

Slow Adaptations

  1. Spike-Timing-Dependent Plasticity (STDP) decreases synapse $w^{EE}_{ij}$ when $i$ activates before $j$ (the connection evidently was not important) and increases it when $i$ activates after $j$ (the connection made a significant contribution). It has no effect when either value is $0$: $$\Delta w^{EE}_{ij} = \eta_{STDP}[x_i(t)x_j(t-1) - x_j(t)x_i(t-1)]$$
  1. Inhibitory Spike-Timing-Dependent Plasticity (iSTDP) increases (decreases) the strength of existing ($w^{EI}_{ij}>0$) inhibitory-to-excitatory connections when they are unsuccessful (successful) at inhibiting their downstream excitatory neurons. Weakening proceeds at a rate $\mu_{IP}$ (the target firing rate) times the strengthening rate: $$\begin{align*} \Delta w^{EI}_{ij} =& \frac{\eta_{inh}}{\mu_{IP}}x_i(t)y_j(t-1) - \eta_{inh}(1-x_i(t))y_j(t-1) \\ =& -\eta_{inh}y_j(t-1)[1-x_i(t)(1+1/\mu_{IP})] \end{align*}$$
  1. Synaptic normalization rescales the incoming weights of $W^{EE}$ and $W^{EI}$ so that each neuron's inputs sum to $1$: $$W^{EE/EI}_{ij} \leftarrow W^{EE/EI}_{ij} / \sum_j W^{EE/EI}_{ij}$$
  1. Structural Plasticity (SP) randomly adds small-valued synapses ($\eta_{SP}=0.001$) between unconnected neurons with Bernoulli probability $p_{SP}$, which scales quadratically with the number of excitatory neurons, $p_{SP}(N^E) = \frac{N^E(N^E-1)}{200(199)}p_{SP}(N^E=200)$, with base case $p_{SP}(N^E=200) = 0.1$: $$w^{EE/EI}_{ij} \leftarrow \eta_{SP}\mathcal{B}(p_{SP}) \quad \text{wherever } w^{EE/EI}_{ij} = 0$$
  1. Intrinsic plasticity (IP) homeostatically adapts excitatory firing thresholds to maintain a mean firing rate $H_{IP} \sim \mathcal{N}(\mu_{IP}=0.1, \sigma_{IP}^2 = 0)$ $$\Delta T^{E} = \eta_{IP}[x(t) - H_{IP}]$$
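The five slow adaptations can be sketched as NumPy updates. Only $\eta_{SP}$, $p_{SP}$, and $\mu_{IP}$ come from the notes; the other learning rates are hypothetical placeholders, and SP is illustrated on a single square matrix:

```python
import numpy as np

rng = np.random.default_rng(1)

# eta_sp, p_sp, mu_ip are from the notes; the other rates are placeholders.
eta_stdp, eta_inh, eta_ip = 0.004, 0.001, 0.01
eta_sp, p_sp, mu_ip = 0.001, 0.1, 0.1

def stdp(W_EE, x_new, x_old):
    """STDP: potentiate w_ij when j fires at t-1 and i at t; depress the reverse."""
    dW = eta_stdp * (np.outer(x_new, x_old) - np.outer(x_old, x_new))
    return W_EE + dW * (W_EE > 0)          # only existing synapses change

def istdp(W_EI, x_new, y_old):
    """iSTDP: strengthen inhibitory synapses that failed to silence their target."""
    dW = -eta_inh * y_old[None, :] * (1 - x_new[:, None] * (1 + 1 / mu_ip))
    return W_EI + dW * (W_EI > 0)

def normalize(W):
    """Synaptic normalization: rescale each row of incoming weights to sum to 1."""
    s = W.sum(axis=1, keepdims=True)
    return np.where(s > 0, W / np.where(s > 0, s, 1), W)

def structural(W):
    """SP: create new synapses of size eta_SP at zero entries with probability p_SP."""
    new = (W == 0) & (rng.random(W.shape) < p_sp)
    return W + eta_sp * new

def intrinsic(T_E, x_new, H_IP):
    """IP: drift excitatory thresholds toward the target rate H_IP."""
    return T_E + eta_ip * (x_new - H_IP)
```

For example, with `x_new = [1, 0]` and `x_old = [0, 1]`, neuron 1 fired before neuron 0, so `stdp` potentiates $w^{EE}_{01}$ and the antisymmetric term depresses $w^{EE}_{10}$.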

Negative ($w<0$), large ($w>1$), and null weights are pruned after each update step. Any self-connections that may have formed are also removed.
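A sketch of this pruning step, assuming weights are kept in a dense matrix (so pruned and null weights are simply zero entries):

```python
import numpy as np

def prune(W):
    """Zero out negative and oversized (>1) weights, and any self-connections."""
    W = W.copy()
    W[(W < 0) | (W > 1)] = 0.0
    if W.shape[0] == W.shape[1]:        # self-connections only exist in square matrices
        np.fill_diagonal(W, 0.0)
    return W
```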

Putting it all together

Experiments

Playground

I want to try

First test
Long test
Fixed numerical stability problem
Testing for power laws

Testing Machine

No Input

Random Input

Structured Input

Notes

Is learning maximized when internal dynamics are already at the edge of criticality?

**Background** Intelligence is a scarce resource and must be used effectively to maximize growth. Intrinsically motivated learning can select only a subset of information from the environment to send through the bottleneck of interaction. However, a human's subjective measure of information is not information theory's negative log-likelihood. Instead, the brain acquires maximum information about its environment from stimuli that are neither overly boring nor excessively surprising; it is critical information, varying between either extreme, that maximally attracts attention. For example, my intrinsic motivation to learn gravitates toward material somewhere between the high-school and postdoctoral levels.

(Del Papa et al. 2017) confirmed that structured input temporarily breaks down critical dynamics; however, power laws reliably return after a period of readaptation.

Observations

**Conclusion** I cannot draw any firm conclusion from the above experiments. Criticality will remain a focus of study.